
    Individual differences in visual salience vary along semantic dimensions

    What determines where we look? Theories of attentional guidance hold that image features and task demands govern fixation behavior, while differences between observers are treated as a “noise ceiling” that strictly limits the predictability of fixations. However, recent twin studies suggest a genetic basis for gaze-trace similarity on a given stimulus. This raises the question of how individuals differ in their gaze behavior and what may explain these differences. Here, we investigated the fixations of more than 100 human adults freely viewing a large set of complex scenes containing thousands of semantically annotated objects. We found systematic individual differences in fixation frequencies along six semantic stimulus dimensions. These differences were large (greater than twofold) and highly stable across images and time. Surprisingly, they also held for the first fixations directed toward each image, which are commonly interpreted as reflecting “bottom-up” visual salience. Their perceptual relevance was documented by a correlation between individual face salience and face recognition skills. The set of reliable individual salience dimensions and their covariance pattern replicated across samples from three different countries, suggesting they reflect fundamental biological mechanisms of attention. Our findings show stable individual differences in salience along a set of fundamental semantic dimensions, with meaningful perceptual implications: visual salience reflects features of the observer as well as the image.
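
    The key quantity behind these claims is easy to make concrete: each observer's tendency to fixate objects of a given semantic category, and the stability of that tendency across images. The sketch below is a minimal illustration with simulated data; the array layout, the example dimension (faces), and all numbers are assumptions, not the study's code or results.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_obs, n_img = 100, 40
    # fix_on_faces[i, j]: proportion of observer i's fixations landing on
    # 'face' objects in image j (simulated, with a per-observer offset)
    fix_on_faces = rng.beta(2, 8, size=(n_obs, n_img)) + rng.normal(0, 0.03, size=(n_obs, 1))

    half_a = fix_on_faces[:, ::2].mean(axis=1)   # mean face salience, even-numbered images
    half_b = fix_on_faces[:, 1::2].mean(axis=1)  # mean face salience, odd-numbered images
    r = np.corrcoef(half_a, half_b)[0, 1]
    reliability = 2 * r / (1 + r)                # Spearman-Brown correction
    print(f"split-half r = {r:.2f}, corrected reliability = {reliability:.2f}")
    ```

    A high corrected split-half correlation across disjoint image sets is what licenses describing such individual differences as stable across images.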

    Chromatic Illumination Discrimination Ability Reveals that Human Colour Constancy Is Optimised for Blue Daylight Illuminations

    The phenomenon of colour constancy in human visual perception keeps perceived surface colours constant despite changes in the light they reflect under changing illumination. Although colour constancy has evolved under a constrained subset of illuminations, it is unknown whether its underlying mechanisms, thought to involve multiple components from retina to cortex, are optimised for particular environmental variations. Here we demonstrate a new method for investigating colour constancy using illumination matching in real scenes which, unlike previous methods using surface matching and simulated scenes, allows testing of multiple real illuminations. We use real scenes consisting of solid familiar or unfamiliar objects against uniform or variegated backgrounds, and compare discrimination performance for typical illuminations from the daylight chromaticity locus (approximately blue-yellow) and atypical spectra from an orthogonal locus (approximately red-green, at correlated colour temperature 6700 K), all produced in real time by a 10-channel LED illuminator. We find that discrimination of illumination changes is poorer along the daylight locus than along the atypical locus, and is poorest for bluer illumination changes in particular, demonstrating conversely that surface colour constancy is best for blue daylight illuminations. Illumination discrimination is also enhanced, and colour constancy therefore diminished, for uniform backgrounds, irrespective of the object type. These results are not explained by statistical properties of the scene signal changes at the retinal level. We conclude that high-level mechanisms of colour constancy are biased towards the blue daylight illuminations and variegated backgrounds to which the human visual system has typically been exposed.
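
    To make the dependent measure concrete, the toy sketch below estimates a discrimination threshold (the illumination change, in CIE u'v' chromaticity units, needed to reach 75% correct) separately for the two loci; a higher threshold along the daylight locus corresponds to poorer illumination discrimination and hence better colour constancy. The psychometric shape, step sizes, and sigma values are invented for illustration.

    ```python
    import numpy as np

    steps = np.linspace(0.002, 0.02, 8)          # illumination change size in u'v' units

    def pcorrect(steps, sigma):
        """Simulated proportion correct; larger sigma means poorer discrimination."""
        return 0.5 + 0.5 * (1 - np.exp(-(steps / sigma) ** 2))

    def threshold(steps, pc, criterion=0.75):
        """Step size reaching the criterion, by linear interpolation."""
        return np.interp(criterion, pc, steps)

    pc_daylight = pcorrect(steps, sigma=0.012)   # daylight locus: poorer discrimination
    pc_atypical = pcorrect(steps, sigma=0.006)   # atypical locus: finer discrimination
    print("daylight-locus threshold:", threshold(steps, pc_daylight))
    print("atypical-locus threshold:", threshold(steps, pc_atypical))
    ```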

    The influence of color on snake detection in visual search in human children

    It is well known that adult humans detect snake targets more quickly than flower targets, and that their speed in detecting a snake picture does not differ between color and gray-scale images, whereas they find a flower picture more rapidly when it is in color than in gray-scale. In the present study, a total of 111 children were presented with 3-by-3 matrices of images of snakes and flowers in either color or gray-scale displays. Unlike the adults reported on previously, the present participants responded to the target faster when it was in color than when it was gray-scale, whether the target was a snake or a flower, and regardless of their age. When detecting snakes, human children appear to attend selectively to their color, which may make detection more rapid at the expense of precision.
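
    The design reduces to a 2 x 2 comparison of search reaction times: target type (snake vs. flower) by display (color vs. gray-scale). The sketch below shows that comparison on simulated data; the RT means are invented solely to mirror the reported pattern (a color advantage for both target types) and carry no empirical weight.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_children = 111
    # Simulated per-child mean search RTs (ms) for the four cells of the design
    rt = {("snake", "color"):   rng.normal(950, 150, n_children),
          ("snake", "gray"):    rng.normal(1050, 150, n_children),
          ("flower", "color"):  rng.normal(1000, 150, n_children),
          ("flower", "gray"):   rng.normal(1100, 150, n_children)}

    for target in ("snake", "flower"):
        advantage = rt[(target, "gray")] - rt[(target, "color")]
        print(f"{target}: color advantage = {advantage.mean():.0f} ms")
    ```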

    The relative contribution of shape and colour to object memory

    The current studies examined the relative contribution of shape and colour to object representations in memory. A great deal of evidence points to the significance of shape in object recognition, with colour playing an instrumental role under certain circumstances. A key yet unanswered question concerns the contribution of colour relative to shape in mediating retrieval of object representations from memory. Two experiments (N=80) used a new method to probe episodic memory for objects and revealed the relative contribution of colour and shape to recognition memory. Participants viewed pictures of objects from different categories, presented one at a time. During a practice phase, participants performed yes/no recognition with some of the studied objects and their distractors. Unpractised objects shared shape only (Rp–Shape), colour only (Rp–Colour), shape and colour (Rp–Both), or neither shape nor colour (Rp–Neither) with the practised objects. Interference in memory between practised and unpractised items was revealed as forgetting of related unpractised items (retrieval-induced forgetting). Retrieval-induced forgetting was consistently significant for Rp–Shape and Rp–Colour objects. These findings provide converging evidence that colour is an automatically encoded object property, and present new evidence that both shape and colour act simultaneously and effectively to drive retrieval of objects from long-term memory.
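
    The logic of the retrieval-practice design can be summarized in a few lines: retrieval-induced forgetting (RIF) is the drop in final memory for unpractised items that share a feature with practised items, relative to the Rp–Neither baseline. The sketch below computes that contrast on simulated hit rates; the layout and numbers are assumptions for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    conditions = ["Rp-Shape", "Rp-Colour", "Rp-Both", "Rp-Neither"]
    # Simulated hit rate per participant (rows) and condition (columns)
    hits = np.clip(rng.normal([0.62, 0.64, 0.60, 0.74], 0.08, size=(80, 4)), 0, 1)

    baseline = hits[:, 3]                        # Rp-Neither serves as baseline
    for i, cond in enumerate(conditions[:3]):
        rif = baseline - hits[:, i]              # positive values = forgetting of related items
        se = rif.std(ddof=1) / np.sqrt(len(rif))
        print(f"{cond}: mean RIF = {rif.mean():+.3f} (SE {se:.3f})")
    ```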

    Human Young Children as well as Adults Demonstrate ‘Superior’ Rapid Snake Detection When Typical Striking Posture Is Displayed by the Snake

    Humans, as well as some nonhuman primates, have an evolved predisposition to associate snakes with fear, detecting their presence as fear-relevant stimuli more rapidly than fear-irrelevant ones. In the present experiment, a total of 74 participants (3- to 4-year-old children and adults) were asked to find a single target black-and-white photo of a snake among an array of eight black-and-white photos of flowers as distracters. As target stimuli, we prepared two groups of snake photos: one showing a snake in a typical striking posture, and the other showing a resting snake. Mean reaction time to find the snake photo was significantly shorter for photos of snakes displaying a striking posture than for photos of resting snakes, in both adults and children. These findings suggest that the human perceptual bias for snakes per se may be modulated by the degree to which their presence acts as a fear-relevant stimulus.

    Grasping isoluminant stimuli

    We used a virtual reality setup to have participants grasp discs that differed in luminance, chromaticity, and size. Current theories of perception and action propose a division of labor in the brain between a color-proficient perception pathway and a less color-capable action pathway. In this study, we asked whether isoluminant stimuli, which provide only a chromatic contrast and no luminance contrast for action planning, are harder to grasp than stimuli providing luminance contrast or both kinds of contrast. Although grasps of isoluminant stimuli showed a slightly steeper slope relating maximum grip aperture to disc size, all other measures of grip quality were unaffected. Overall, our results do not support the view that isoluminance of stimulus and background impedes the planning of a grasping movement.
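
    The critical measure here, grip scaling, is the slope of a regression of maximum grip aperture (MGA) on object size, estimated per contrast condition. The sketch below illustrates that analysis on simulated trials; condition names, slopes, and noise levels are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    disc_size = np.tile(np.array([30.0, 40.0, 50.0]), 20)   # disc diameters in mm, repeated trials

    def simulate_mga(slope, intercept=25.0, noise=3.0):
        """Simulated maximum grip apertures for one condition."""
        return intercept + slope * disc_size + rng.normal(0, noise, disc_size.size)

    # Hypothetical grip-scaling slopes; the isoluminant set is slightly steeper
    for condition, true_slope in [("luminance", 0.80), ("isoluminant", 0.92), ("both", 0.82)]:
        mga = simulate_mga(true_slope)
        fitted_slope, _ = np.polyfit(disc_size, mga, 1)     # linear fit: MGA ~ size
        print(f"{condition}: MGA-size slope = {fitted_slope:.2f}")
    ```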

    Evolution and Optimality of Similar Neural Mechanisms for Perception and Action during Search

    A prevailing theory proposes that the brain's two visual pathways, the ventral and dorsal, lead to different visual processing and world representations for conscious perception than for action. Others have claimed that perception and action share much of their visual processing. But which of these two neural architectures is favored by evolution? Successful visual search is life-critical, and here we investigate the evolution and optimality of neural mechanisms mediating perception and eye-movement actions during visual search in natural images. We implement an approximation to the ideal Bayesian searcher with two separate processing streams, one controlling eye movements and the other determining perceptual search decisions. We virtually evolved the neural mechanisms of the searchers' two separate pathways, built from linear combinations of primary visual cortex (V1) receptive fields, by making each simulated individual's probability of survival depend on its perceptual accuracy in finding targets in cluttered backgrounds. We find that for a variety of targets, backgrounds, and dependences of target detectability on retinal eccentricity, the mechanisms of the searchers' two processing streams converge to similar representations, showing that mismatches between the mechanisms for perception and eye movements lead to suboptimal search. Three exceptions, which resulted in partial or no convergence, were an organism for which targets are equally detectable across the retina, an organism with sufficient time to foveate all possible target locations, and a strict two-pathway model with no interconnections and differential pre-filtering based on parvocellular and magnocellular lateral geniculate cell properties. Thus, similar neural mechanisms for perception and eye-movement actions during search are optimal and should be expected from the effects of natural selection on an organism with limited time to search for food that is not equally detectable across its retina and that has interconnected perception and action neural pathways.
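
    The "virtual evolution" idea can be sketched in a drastically simplified form: candidate mechanisms are linear combinations of fixed basis filters, and each generation is selected on target-detection accuracy in noise. The code below is such a sketch, with a single evolved stream, random stand-in filters, and no eccentricity-dependent visibility; the full model (an ideal Bayesian searcher with separate perception and eye-movement streams) is far richer.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_pix, n_basis, pop, gens = 64, 8, 40, 60
    basis = rng.normal(size=(n_basis, n_pix))    # stand-in for V1-like receptive fields
    target = basis.T @ rng.normal(size=n_basis)  # target lies in the span of the basis

    def accuracy(weights, n_trials=200):
        """Detection accuracy of a linear template in white noise."""
        template = weights @ basis
        present = rng.normal(size=(n_trials, n_pix)) + target
        absent = rng.normal(size=(n_trials, n_pix))
        crit = np.median(np.concatenate([present, absent]) @ template)
        return 0.5 * ((present @ template > crit).mean() + (absent @ template <= crit).mean())

    population = rng.normal(size=(pop, n_basis))
    for _ in range(gens):
        fitness = np.array([accuracy(w) for w in population])
        parents = population[np.argsort(fitness)[-pop // 2:]]        # keep the fittest half
        children = parents + rng.normal(0, 0.1, size=parents.shape)  # mutate to form offspring
        population = np.concatenate([parents, children])

    best = population[np.argmax([accuracy(w) for w in population])]
    template = best @ basis
    match = template @ target / (np.linalg.norm(template) * np.linalg.norm(target))
    print(f"cosine similarity of evolved template to the target: {match:.2f}")
    ```

    Under these assumptions the evolved template converges toward the matched filter, the same selection pressure that, in the full two-stream model, drives perception and eye-movement mechanisms toward similar representations.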

    Selective Processing of Multiple Features in the Human Brain: Effects of Feature Type and Salience

    Identifying targets in a stream of items at a constant spatial location relies on selecting stimulus aspects such as color, shape, or texture. Such attended (target) features elicit a negative-going event-related brain potential (ERP) component, termed the Selection Negativity (SN), which has been used as an index of selective feature processing. In two experiments, participants viewed a series of Gabor patches in which targets were defined as a specific combination of color, orientation, and shape. Distracters were composed of different combinations of the target's color, orientation, and shape. This design allows comparisons of items with and without specific target features. Consistent with previous ERP research, SN deflections extended from 160 to 300 ms. Data from the subsequent P3 component (300–450 ms post-stimulus), regarded as an index of target processing, were also examined. In Experiment A, predominant effects of target color on SN and P3 amplitudes were found, along with smaller ERP differences in response to variations of orientation and shape. Manipulating color to be less salient while enhancing the salience of the Gabor patch's orientation (Experiment B) led to delayed color selection and enhanced orientation selection. Topographical analyses suggested that the scalp location of the SN varies reliably with the nature of the to-be-attended feature. No interference of non-target features on the SN was observed. These results suggest that target feature selection operates by means of electrocortical facilitation of feature-specific sensory processes, and that this selective facilitation is more effective when stimulus salience is heightened.
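
    Quantitatively, an SN is typically measured as the mean amplitude difference, within the 160-300 ms window, between ERPs to stimuli containing an attended feature and those without it. The sketch below computes that difference on simulated single-channel epochs; the sampling rate, effect shape, and noise level are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    srate, n_trials, n_samples = 500, 120, 400   # Hz; epochs span 0-798 ms
    times = np.arange(n_samples) / srate * 1000  # sample times in ms

    # Attended-feature trials carry an extra negative deflection around 160-300 ms
    sn_shape = -2.0 * np.exp(-((times - 230) / 40) ** 2)
    attended = rng.normal(0, 5, (n_trials, n_samples)) + sn_shape
    unattended = rng.normal(0, 5, (n_trials, n_samples))

    window = (times >= 160) & (times <= 300)
    sn = attended[:, window].mean() - unattended[:, window].mean()
    print(f"SN mean amplitude (attended - unattended): {sn:.2f} uV")
    ```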

    The Spatial and Temporal Construction of Confidence in the Visual Scene

    Human subjects can report many items from a cluttered field a few hundred milliseconds after stimulus presentation. This memory decays rapidly, and after a second only three or four items can be stored in working memory. Here we compared the dynamics of objective performance with a measure of subjective report, and we observed that (1) objective performance beyond explicit subjective reports (blindsight) was significantly more pronounced within a short temporal interval and within specific locations of the visual field, which were robust across sessions; (2) high-confidence errors (false beliefs) were largely confined to a small spatial window neighboring the cue, the size of which did not change over time; and (3) subjective confidence showed a moderate but consistent decrease over time, independent of all other experimental factors. Our study allowed us to assess quantitatively the temporal and spatial access to an objective response and to subjective reports.
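
    The two key quantities contrasted here can be made explicit: objective accuracy on trials the observer reports low confidence about (the blindsight-like effect) and the rate of high-confidence errors (false beliefs). The sketch below computes both from simulated trial data; the confidence scale, cutoffs, and accuracy level are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 2000
    correct = rng.random(n) < 0.70                   # objective accuracy of each report
    # Confidence (1-4 scale) loosely tracks accuracy: correct trials rate higher
    confidence = np.clip(rng.normal(np.where(correct, 2.6, 1.8), 0.8), 1, 4)

    low_conf = confidence <= 2
    blindsight_acc = correct[low_conf].mean()        # accuracy despite low confidence
    hc_errors = ((confidence >= 3.5) & ~correct).mean()
    print(f"accuracy on low-confidence trials: {blindsight_acc:.2f} (chance = 0.50)")
    print(f"high-confidence error rate: {hc_errors:.3f}")
    ```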